GPT-4's One-Dimensional Mapping of Morality: How the Accuracy of Country-Estimates Depends on Moral Domain

Strimling, Pontus, Krueger, Joel, Karlsson, Simon

arXiv.org Artificial Intelligence

Prior research demonstrates that OpenAI's GPT models can predict variations in moral opinions between countries, but that the accuracy tends to be substantially higher among high-income countries than low-income ones. This study aims to replicate previous findings and advance the research by examining how accuracy varies with different types of moral questions. Using responses from the World Values Survey and the European Values Study, covering 18 moral issues across 63 countries, we calculated country-level mean scores for each moral issue and compared them with GPT-4's predictions. Confirming previous findings, our results show that GPT-4 has greater predictive success in high-income than in low-income countries. However, our factor analysis reveals that GPT-4 bases its predictions primarily on a single dimension, presumably reflecting countries' degree of conservatism/liberalism. Conversely, the real-world moral landscape appears to be two-dimensional, differentiating between personal-sexual and violent-dishonest issues. When moral issues are categorized by moral domain, GPT-4's predictions prove remarkably accurate in the personal-sexual domain, across both high-income (r = .77) and low-income (r = .58) countries. Yet predictive accuracy drops sharply in the violent-dishonest domain for both high-income (r = .30) and low-income (r = -.16) countries, indicating that GPT-4's one-dimensional worldview does not fully capture the complexity of the moral landscape. In sum, this study underscores the importance of considering not only country-specific characteristics but also the characteristics of the moral issues at hand when assessing GPT-4's moral understanding.
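To make the reported comparison concrete, below is a minimal sketch (Python; not the authors' code) of the per-domain accuracy check the abstract describes: correlating country-level survey means with GPT-4's predicted scores, split by moral domain and income group. The file name and the columns "survey_mean", "gpt4_pred", "domain", and "income_group" are illustrative assumptions, not the study's actual data layout.

```python
# Minimal sketch of the paper's core comparison, under an assumed data layout:
# one row per country x moral issue, holding the country's mean survey
# response and GPT-4's predicted score for that issue.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("moral_scores.csv")  # hypothetical file name and schema

for domain in ["personal-sexual", "violent-dishonest"]:
    for income in ["high", "low"]:
        sub = df[(df["domain"] == domain) & (df["income_group"] == income)]
        # Pearson correlation between observed country means and GPT-4 predictions
        r, p = pearsonr(sub["survey_mean"], sub["gpt4_pred"])
        print(f"{domain:17s} | {income}-income: r = {r:.2f} (p = {p:.3f})")
```

Under the paper's findings, a run like this would show strong correlations (around .77 and .58) in the personal-sexual domain but much weaker ones (around .30 and -.16) in the violent-dishonest domain.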


Responsibility assignment won't solve the moral issues of artificial intelligence

#artificialintelligence

Overview: The multitude of AI ethics guidelines published over the last 10 years take for granted certain buzzwords, such as 'responsibility', without due inspection of their philosophical foundations and whether they are actually fit for purpose. This paper offers a challenge to the notion that 'responsibility' is suitable and sufficient for AI ethics work. We have all seen the AI ethics buzzwords by now: 'explainability', 'transparency', and the big one, 'responsibility'. But what do these buzzwords offer in practice? This paper challenges the notion that AI ethicists can gain anything meaningful from employing the catch-all term 'responsibility'. Responsibility is disassembled into differentiated parts, 'accountability', 'liability', and 'praise and blameworthiness', each of which offers unique insights into the ethical challenges AI poses.


A Guided Tour of AI and the Murky Ethical Issues It Raises

#artificialintelligence

As I read Melanie Mitchell's "Artificial Intelligence: A Guide for Thinking Humans," I found myself recalling John Updike's 1986 novel "Roger's Version." One of its characters, Dale, is determined to use a computer to prove the existence of God. Dale's search leads him into a mind-bending labyrinth where religious-metaphysical questions overwhelm his beloved technology and leave the poor fellow discombobulated. I sometimes had a similar experience reading "Artificial Intelligence." In Mitchell's telling, artificial intelligence (AI) raises extraordinary issues that have disquieting implications for humanity. AI isn't for the faint of heart, and neither is this book for nonscientists. To begin with, artificial intelligence -- "machine thinking," as the author puts it -- raises a pair of fundamental questions: What is thinking, and what is intelligence? Since the end of World War II, scientists, philosophers, and scientist-philosophers (the two have often seemed to merge during the past 75-odd years) have been grappling with those very questions, offering up ideas that seem to engender further questions and profound moral issues. Mitchell, a computer science professor at Portland State University and the author of "Complexity: A Guided Tour," doesn't resolve these questions and issues -- she all but acknowledges that they are irresolvable at present -- but provides readers with insightful, common-sense scrutiny of how these and related topics pervade the discipline of artificial intelligence. Mitchell traces the origin of modern AI research to a 1956 Dartmouth College summer study group: its members included John McCarthy (who was the group's catalyst and coined the term artificial intelligence); Marvin Minsky, who would become a noted artificial intelligence theorist; cognitive scientists Herbert Simon and Allen Newell; and Claude Shannon ("the inventor of information theory"). Mitchell describes McCarthy, Minsky, Simon, and Newell as the "big four" pioneers of AI.


Please stop worrying that driverless cars would run over kids

#artificialintelligence

I mean it. A few days ago I saw a discussion on Twitter that ran more or less like this: the "enormous moral issues" raised in that Twitter thread are one instance of the ethical thought experiment known as the "Trolley Problem", also dubbed "the Internet's Most Philosophical Meme". The problem is that, when it comes to driverless cars, the whole "trolley problem" approach consists, in very large part, of barking up the wrong tree. It is not that the "moral issues that can never be understood by an AI" are not enormous; it is that a far more urgent question goes unasked: "how can we keep buying and using PRIVATE, or even shared, driverless cars in the SAME cities as today? The same cities that every serious forecast says will host a larger percentage of the human population every year?" Next to that, the trolley problem is not a question, or a problem, worthy of high priority.